    Autonomy, Standing, and Children's Rights

    Get PDF

    Standardization of electroencephalography for multi-site, multi-platform and multi-investigator studies: Insights from the Canadian Biomarker Integration Network in Depression

    Get PDF
    Subsequent to global initiatives in mapping the human brain and investigations of neurobiological markers for brain disorders, the number of multi-site studies involving the collection and sharing of large volumes of brain data, including electroencephalography (EEG), has been increasing. Among the complexities of conducting multi-site studies and increasing the shelf life of biological data beyond the original study are timely standardization and documentation of relevant study parameters. We present the insights gained and guidelines established within the EEG working group of the Canadian Biomarker Integration Network in Depression (CAN-BIND). CAN-BIND is a multi-site, multi-investigator, and multi-project network supported by the Ontario Brain Institute with access to Brain-CODE, an informatics platform that hosts a multitude of biological data across a growing list of brain pathologies. We describe our approaches and insights on documenting and standardizing parameters across the study design, data collection, monitoring, analysis, integration, knowledge-translation, and data archiving phases of CAN-BIND projects. We introduce a custom-built EEG toolbox to track data preprocessing, with open access for the scientific community. We also evaluate the impact of variation in equipment setup on the accuracy of acquired data. Collectively, this work is intended to inspire establishing comprehensive and standardized guidelines for multi-site studies.
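    A minimal sketch of the preprocessing-tracking idea described above, assuming a simple JSON provenance log; the step names, parameters, and file names are invented for illustration and are not the CAN-BIND toolbox's actual interface.

        # Hypothetical per-file preprocessing provenance log (illustrative only).
        import json
        from datetime import datetime, timezone

        def log_step(provenance, step_name, **parameters):
            """Append one preprocessing step and its parameters to the record."""
            provenance["steps"].append({
                "step": step_name,
                "parameters": parameters,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            })

        provenance = {"file": "sub-001_task-rest_eeg.set", "steps": []}  # made-up file name
        log_step(provenance, "bandpass_filter", l_freq=1.0, h_freq=100.0)
        log_step(provenance, "notch_filter", freqs=[60.0])
        log_step(provenance, "bad_channel_interpolation", method="spherical_spline")

        # Writing the record alongside the data keeps the processing history shareable.
        with open("sub-001_task-rest_eeg_provenance.json", "w") as f:
            json.dump(provenance, f, indent=2)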

    Comparison of quality control methods for automated diffusion tensor imaging analysis pipelines

    Get PDF
    © 2019 Haddad et al. The processing of brain diffusion tensor imaging (DTI) data for large cohort studies requires fully automatic pipelines to perform quality control (QC) and artifact/outlier removal procedures on the raw DTI data prior to calculation of diffusion parameters. In this study, three automatic DTI processing pipelines, each complying with the general ENIGMA framework, were designed by uniquely combining multiple image processing software tools. Different QC procedures based on the RESTORE algorithm, the DTIPrep protocol, and a combination of both methods were compared using simulated ground-truth and artifact-containing DTI datasets modeling eddy-current-induced distortions, various levels of motion artifacts, and thermal noise. Variability was also examined in 20 DTI datasets acquired in subjects with vascular cognitive impairment (VCI) from the multi-site Ontario Neurodegenerative Disease Research Initiative (ONDRI). The mean fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) were calculated in global brain grey matter (GM) and white matter (WM) regions. For the simulated DTI datasets, the measure used to evaluate the performance of the pipelines was the normalized difference between the mean DTI metrics measured in GM and WM regions and the corresponding ground-truth DTI value. The performance of the proposed pipelines was very similar, particularly in FA measurements. However, the pipeline based on the RESTORE algorithm was the most accurate when analyzing the artifact-containing DTI datasets. The pipeline that combined the DTIPrep protocol and the RESTORE algorithm produced the lowest standard deviation in FA measurements in normal-appearing WM across subjects. We concluded that this pipeline was the most robust and is preferred for automated analysis of multi-site brain DTI data.
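    As a point of reference for the metrics compared above, a minimal sketch of how FA, MD, AD, and RD follow from a voxel's tensor eigenvalues; the eigenvalues and the ground-truth FA used in the normalized-difference example are made up, not taken from the study.

        # Scalar DTI metrics from the three eigenvalues of a diffusion tensor.
        import numpy as np

        def dti_metrics(eigenvalues):
            """Return FA, MD, AD, RD for one voxel's tensor eigenvalues."""
            l1, l2, l3 = np.sort(eigenvalues)[::-1]      # l1 >= l2 >= l3
            md = (l1 + l2 + l3) / 3.0                    # mean diffusivity
            ad = l1                                      # axial diffusivity
            rd = (l2 + l3) / 2.0                         # radial diffusivity
            num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
            den = l1 ** 2 + l2 ** 2 + l3 ** 2
            fa = np.sqrt(1.5 * num / den) if den > 0 else 0.0
            return fa, md, ad, rd

        fa, md, ad, rd = dti_metrics([1.7e-3, 0.4e-3, 0.3e-3])  # typical WM values, mm^2/s
        # The simulation evaluation measure is the normalized difference between a
        # pipeline's mean metric and the ground-truth value, e.g. for FA:
        normalized_difference = (fa - 0.76) / 0.76           # 0.76 is an illustrative ground truth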

    Multisite Comparison of MRI Defacing Software Across Multiple Cohorts

    Get PDF
    With improvements to both scan quality and facial recognition software, there is an increased risk of participants being identified by a 3D render of their structural neuroimaging scans, even when all other personal information has been removed. To prevent this, facial features should be removed before data are shared or openly released, but while there are several publicly available software algorithms to do this, there has been no comprehensive review of their accuracy within the general population. To address this, we tested multiple algorithms on 300 scans from three neuroscience research projects, funded in part by the Ontario Brain Institute, to cover a wide range of ages (3–85 years) and multiple patient cohorts. While skull stripping is more thorough at removing identifiable features, we focused mainly on defacing software, as skull stripping also removes potentially useful information, which may be required for future analyses. We tested six publicly available algorithms (afni_refacer, deepdefacer, mri_deface, mridefacer, pydeface, quickshear), with one skull stripper (FreeSurfer) included for comparison. Accuracy was measured through a pass/fail system with two criteria: one, that all facial features had been removed, and two, that no brain tissue was removed in the process. A subset of defaced scans was also run through several preprocessing pipelines to ensure that none of the algorithms would alter the resulting outputs. We found that the success rates varied strongly between defacers, with afni_refacer (89%) and pydeface (83%) having the highest rates overall. In both cases, the primary source of failure came from a single dataset that the defacer appeared to struggle with: the youngest cohort (3–20 years) for afni_refacer and the oldest (44–85 years) for pydeface, demonstrating that defacer performance not only depends on the data provided, but that this effect varies between algorithms. While there were some very minor differences between the preprocessing results for defaced and original scans, none of these were significant, and all were within the range of variation between using different NIfTI converters or using raw DICOM files.
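    The pass/fail criterion above can be pictured with a small tally in which a scan passes only if all facial features were removed and no brain tissue was lost; the ratings below are fabricated for demonstration and do not reproduce the study's results.

        # Toy pass/fail tally for defacing accuracy (ratings are invented examples).
        from collections import defaultdict

        ratings = [
            # (algorithm, cohort, face_removed, brain_intact)
            ("afni_refacer", "3-20y",  False, True),
            ("afni_refacer", "44-85y", True,  True),
            ("pydeface",     "3-20y",  True,  True),
            ("pydeface",     "44-85y", False, True),
        ]

        passes, totals = defaultdict(int), defaultdict(int)
        for algorithm, cohort, face_removed, brain_intact in ratings:
            totals[algorithm] += 1
            passes[algorithm] += int(face_removed and brain_intact)

        for algorithm in totals:
            rate = 100.0 * passes[algorithm] / totals[algorithm]
            print(f"{algorithm}: {rate:.0f}% pass rate")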

    Resting state fMRI scanner instabilities revealed by longitudinal phantom scans in a multi-center study

    Get PDF
    Quality assurance (QA) is crucial in longitudinal and/or multi-site studies, which involve the collection of data from a group of subjects over time and/or at different locations. It is important to regularly monitor the performance of the scanners over time and at different locations to detect and control for intrinsic differences (e.g., due to manufacturers) and changes in scanner performance (e.g., due to gradual component aging, software and/or hardware upgrades, etc.). As part of the Ontario Neurodegenerative Disease Research Initiative (ONDRI) and the Canadian Biomarker Integration Network in Depression (CAN-BIND), QA phantom scans were conducted approximately monthly for three to four years at 13 sites across Canada with 3T research MRI scanners. QA parameters were calculated for each scan using the functional Biomedical Informatics Research Network's (fBIRN) QA phantom and pipeline to capture between- and within-scanner variability. We also describe a QA protocol to measure the full-width-at-half-maximum (FWHM) of slice-wise point spread functions (PSF), used in conjunction with the fBIRN QA parameters. Variations in image resolution measured by the FWHM are a primary source of variance over time for many sites, as well as between sites and between manufacturers. We also identify an unexpected range of instabilities affecting individual slices in a number of scanners, which may amount to a substantial contribution of unexplained signal variance to their data. Finally, we identify a preliminary preprocessing approach to reduce this variance and/or alleviate the slice anomalies, and in a small human dataset show that this change in preprocessing can have a significant impact on seed-based connectivity measurements for some individual subjects. We expect that other fMRI centres will find this approach to identifying and controlling scanner instabilities useful in similar studies.
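    One way to picture the FWHM measurement described above is a Gaussian fit to a one-dimensional point-spread-function profile; this is only a generic sketch with synthetic data, not the paper's actual QA protocol.

        # Estimate FWHM by fitting a Gaussian to a synthetic 1-D PSF profile.
        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(x, amplitude, center, sigma):
            return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

        x = np.linspace(-5.0, 5.0, 101)                 # position in mm
        rng = np.random.default_rng(0)
        profile = gaussian(x, 1.0, 0.0, 1.4) + 0.01 * rng.standard_normal(x.size)

        (amplitude, center, sigma), _ = curve_fit(gaussian, x, profile, p0=(1.0, 0.0, 1.0))
        fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)   # FWHM = 2*sqrt(2*ln2)*sigma
        print(f"Estimated FWHM: {fwhm:.2f} mm")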

    The CAMH Neuroinformatics Platform: A Hospital-Focused Brain-CODE Implementation

    Get PDF
    Investigations of mental illness have been enriched by the advent and maturation of neuroimaging technologies and by the rapid pace and increased affordability of molecular sequencing techniques; however, the increased volume, variety, and velocity of research data present a considerable technical and analytic challenge to curate, federate, and interpret. Aggregation of high-dimensional datasets across brain disorders can increase sample sizes and may help identify underlying causes of brain dysfunction; however, additional barriers exist for effective data harmonization and integration for their combined use in research. To help realize the potential of multi-modal data integration for the study of mental illness, the Centre for Addiction and Mental Health (CAMH) constructed a centralized data capture, visualization, and analytics environment—the CAMH Neuroinformatics Platform—based on the Ontario Brain Institute (OBI) Brain-CODE architecture, towards the curation of a standardized, consolidated psychiatric hospital-wide research dataset directly coupled to high-performance computing resources.

    Ontario Neurodegenerative Disease Research Initiative (ONDRI): Structural MRI Methods and Outcome Measures

    Get PDF
    The Ontario Neurodegenerative Disease Research Initiative (ONDRI) is a 3-year multi-site prospective cohort study that has acquired comprehensive multiple assessment platform data, including 3T structural MRI, from neurodegenerative patients with Alzheimer's disease, mild cognitive impairment, Parkinson's disease, amyotrophic lateral sclerosis, frontotemporal dementia, and cerebrovascular disease. This heterogeneous cross-section of patients with complex neurodegenerative and neurovascular pathologies poses significant challenges for standard neuroimaging tools. To effectively quantify regional measures of normal and pathological brain tissue volumes, the ONDRI neuroimaging platform implemented a semi-automated MRI processing pipeline that was able to address many of the challenges resulting from this heterogeneity. The purpose of this paper is to serve as a reference and conceptual overview of the comprehensive neuroimaging pipeline used to generate regional brain tissue volumes and neurovascular marker data that will be made publicly available online.
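    A minimal sketch of how regional tissue volumes can be read off a labelled segmentation image with nibabel; the file name and label coding are assumptions for illustration and do not reflect the ONDRI pipeline's actual conventions.

        # Regional tissue volumes from a (hypothetical) segmentation label map.
        import numpy as np
        import nibabel as nib

        segmentation = nib.load("sub-001_tissue_labels.nii.gz")         # assumed file name
        labels = segmentation.get_fdata()
        voxel_volume_mm3 = float(np.prod(segmentation.header.get_zooms()[:3]))

        tissue_labels = {"grey_matter": 1, "white_matter": 2, "csf": 3}  # assumed label coding
        for tissue, label in tissue_labels.items():
            volume_ml = np.sum(labels == label) * voxel_volume_mm3 / 1000.0
            print(f"{tissue}: {volume_ml:.1f} mL")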

    A Comparison of Neuroelectrophysiology Databases

    Full text link
    As data sharing has become more prevalent, three pillars - archives, standards, and analysis tools - have emerged as critical components in facilitating effective data sharing and collaboration. This paper compares four freely available intracranial neuroelectrophysiology data repositories: Data Archive for the BRAIN Initiative (DABI), Distributed Archives for Neurophysiology Data Integration (DANDI), OpenNeuro, and Brain-CODE. These archives provide researchers with tools to store, share, and reanalyze neurophysiology data, though the means of accomplishing these objectives differ. The Brain Imaging Data Structure (BIDS) and Neurodata Without Borders (NWB) standards are utilized by these archives to make data more accessible to researchers by implementing a common standard. While many tools are available to reanalyze data on and off the archives' platforms, this article features the Reproducible Analysis and Visualization of Intracranial EEG (RAVE) toolkit, developed specifically for the analysis of intracranial signal data and integrated with the discussed standards and archives. Neuroelectrophysiology data archives improve how researchers can aggregate, analyze, distribute, and parse these data, which can lead to more significant findings in neuroscience research.
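    As a small illustration of the NWB standard mentioned above, a minimal pynwb sketch that writes one synthetic time series to an NWB file; the session details and signal are placeholders, not data from any of the archives discussed.

        # Create and write a minimal NWB file containing one synthetic time series.
        from datetime import datetime, timezone
        import numpy as np
        from pynwb import NWBFile, NWBHDF5IO, TimeSeries

        nwbfile = NWBFile(
            session_description="demo intracranial recording session",
            identifier="demo-0001",
            session_start_time=datetime.now(timezone.utc),
        )

        # Attach a small synthetic voltage trace as an acquisition time series.
        signal = TimeSeries(
            name="ieeg_demo",
            data=np.random.default_rng(0).standard_normal(1000),
            unit="volts",
            rate=1000.0,
        )
        nwbfile.add_acquisition(signal)

        with NWBHDF5IO("demo_session.nwb", "w") as io:
            io.write(nwbfile)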